Zstd bomb fix #937
base: main
Conversation
bcs::from_bytes(decompressed.as_slice()).context("failed to deserialize blob")?;
// decompress the blob and deserialize the data with bcs
let decoder = zstd::Decoder::new(blob.data.as_slice())?;
let blob = bcs::from_reader(decoder).context("failed to deserialize blob")?;
Do you know how fill_buf would be called here? I'm wondering whether this is in fact going to raise an Error, or whether it might just panic if the buffer size is restricted.
I don't think I fully understand this question, and we didn't restrict the size of the internal buffer. But you did bring to my attention that the slice also supports BufRead, and we can take advantage of it, bypassing the intermediate buffer.
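For illustration only, a minimal sketch of what this reply suggests, assuming the bcs::from_reader API shown in the diff above: since &[u8] already implements BufRead, zstd::Decoder::with_buffer can wrap the compressed slice directly, avoiding the extra BufReader that Decoder::new would insert. Names here are illustrative, not the exact code in this PR.

use anyhow::Context;
use serde::de::DeserializeOwned;

// Sketch: stream decompression straight into the bcs deserializer.
// The compressed slice already implements `BufRead`, so no intermediate
// buffer (and no `BufReader` wrapper) is needed between zstd and bcs.
fn decompress_and_deserialize<T: DeserializeOwned>(compressed: &[u8]) -> anyhow::Result<T> {
    let decoder = zstd::Decoder::with_buffer(compressed)?;
    bcs::from_reader(decoder).context("failed to deserialize blob")
}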
Yes, bcs would return an Error.
I have implemented a customizable limit on the length of any variable-length fields in zefchain/bcs#12.
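As a rough, hypothetical illustration of that answer, using the stock 2^31 - 1 sequence-length cap mentioned in this PR rather than the configurable limit from zefchain/bcs#12: a ULEB128 length prefix claiming more than the cap makes bcs return an error instead of allocating.

#[test]
fn oversized_length_prefix_is_rejected() {
    // ULEB128 encoding of 0xFFFF_FFFF (~4 GiB), above the 2^31 - 1 cap that
    // bcs enforces on sequence lengths; deserialization should return an Err
    // rather than attempt the allocation.
    let huge_len_prefix: &[u8] = &[0xFF, 0xFF, 0xFF, 0xFF, 0x0F];
    let result: Result<Vec<u8>, _> = bcs::from_bytes(huge_len_prefix);
    assert!(result.is_err());
}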
Use streamed decompression API to avoid potentially huge allocations for blob data decompressed from zstd. The bcs decoder enforces the length limit of 2^31 - 1 on byte arrays.
Force-pushed from 16ea004 to b2fd870
Avoid an intermediate buffer when serializing and compressing for CelestiaBlob, plugging the encoder into the serializer.
The data slice is already BufRead; just use it directly.
Use a crafted zstd payload as submitted in #876 (comment)
Need this to enable length limit enforcement on deserialization of blobs.
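Regarding the commit above that plugs the encoder into the serializer, a rough sketch of the idea follows; bcs::serialize_into is an assumption here (a writer-based counterpart to bcs::from_reader) and may not match the exact API used by this PR's bcs fork.

use anyhow::Context;
use serde::Serialize;

// Sketch: serialize directly into the zstd encoder instead of building an
// intermediate Vec and compressing it afterwards. `bcs::serialize_into` is
// an assumed writer-based entry point; the real code may use a different one.
fn serialize_and_compress<T: Serialize>(value: &T, level: i32) -> anyhow::Result<Vec<u8>> {
    let mut encoder = zstd::Encoder::new(Vec::new(), level)?;
    bcs::serialize_into(&mut encoder, value).context("failed to serialize blob")?;
    Ok(encoder.finish()?)
}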
Fixes #876 (but see outstanding issues below).
Dependencies
movementlabsxyz/aptos-core#111
Summary
protocol-units: Use streamed decompression API to avoid potentially huge allocations for blob data decompressed from zstd. The bcs decoder enforces the length limit of 2^31 - 1 on byte arrays.
Changelog
Testing
Added unit tests to movement-celestia-da-util to exercise compression and decompression, with both over-limit and valid lengths for the compressed payload and the data field. The test that creates legitimate data structures with super-long blobs is ignored in default runs because of its memory and processing requirements.
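A hypothetical sketch of the kind of negative test described here (not the actual test code in movement-celestia-da-util; payload sizes and names are illustrative, and bcs::from_reader is assumed per the diff above):

#[test]
fn zstd_bomb_is_rejected() -> anyhow::Result<()> {
    // Hand-craft a payload whose ULEB128 length prefix claims ~4 GiB of data,
    // padded with a run of zeros that compresses to almost nothing, similar
    // in spirit to the crafted payload from #876.
    let mut bomb = vec![0xFF, 0xFF, 0xFF, 0xFF, 0x0F];
    bomb.extend(std::iter::repeat(0u8).take(1 << 20));
    let compressed = zstd::encode_all(bomb.as_slice(), 0)?;

    // Streamed decompression plus the bcs length cap means the oversized
    // length prefix is rejected before any multi-gigabyte allocation.
    let decoder = zstd::Decoder::with_buffer(compressed.as_slice())?;
    let result: Result<Vec<u8>, _> = bcs::from_reader(decoder);
    assert!(result.is_err());
    Ok(())
}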
Outstanding issues
As commented in #876 (comment), there are four byte array fields in the blob data structure that can each be bloated up to 2 GiB.